Regression is one of the core problems tackled in supervised learning. Rectified-linear-unit (ReLU) neural networks, which generate continuous and piecewise-linear (CPWL) mappings, are the state-of-the-art approach for solving regression problems. In this paper, we propose an alternative method to express CPWL functions. In contrast to deep neural networks, our CPWL parameterization guarantees stability and is interpretable. Our approach relies on partitioning the domain of the CPWL function with a Delaunay triangulation. The function values at the vertices of the triangulation are our learnable parameters and uniquely identify the CPWL function. Formulating the learning scheme as a variational problem, we use the Hessian total variation (HTV) as a regularizer to favor CPWL functions with few affine pieces. In this way, we control the complexity of the model through a single hyperparameter. By developing a computational framework to evaluate the HTV of any CPWL function parameterized by a triangulation, we discretize the learning problem as a generalized least absolute shrinkage and selection operator (LASSO) problem. Our experiments validate the use of our method in low-dimensional scenarios.
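As a rough illustration of this parameterization (not the authors' code), the sketch below fits a CPWL model on a fixed Delaunay triangulation with scipy and sklearn: the learnable parameters are the vertex values, and a plain L1 penalty on them stands in for the HTV regularizer, whose proper discretization would use the triangulation's edge structure.

```python
# Minimal sketch (not the paper's code): CPWL regression on a fixed Delaunay
# triangulation, with an L1 penalty as a placeholder for the HTV regularizer.
import numpy as np
from scipy.spatial import Delaunay
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
vertices = rng.uniform(-1, 1, size=(50, 2))      # triangulation sites
tri = Delaunay(vertices)

X = rng.uniform(-1, 1, size=(500, 2))            # training inputs
y = np.sin(3 * X[:, 0]) * X[:, 1]                # toy regression target

# Each row of A holds the barycentric weights of one sample with respect to
# the vertices of the simplex containing it; A @ c is then the CPWL model
# with vertex values c.
A = np.zeros((len(X), len(vertices)))
containing = tri.find_simplex(X)
for i, s in enumerate(containing):
    if s == -1:                                  # point outside the convex hull
        continue
    T = tri.transform[s]
    b = T[:2] @ (X[i] - T[2])                    # first two barycentric coords
    A[i, tri.simplices[s]] = np.append(b, 1 - b.sum())

# Sparse-regularized least squares; the single penalty weight plays the role
# of the paper's one model-complexity hyperparameter.
vertex_values = Lasso(alpha=1e-3, fit_intercept=False).fit(A, y).coef_
print("prediction at the first sample:", A[0] @ vertex_values, "target:", y[0])
```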
Many feedforward neural networks generate continuous and piecewise-linear (CPWL) mappings. Specifically, they partition the input domain into regions on which the mapping is an affine function. The number of these so-called linear regions provides a natural metric to characterize the expressiveness of CPWL mappings. Although the exact determination of this quantity is often intractable, bounds have been proposed for specific architectures, including the well-known ReLU and Maxout networks. In this work, we take a more general perspective and provide precise bounds on the maximal number of linear regions of CPWL networks based on three sources of expressiveness: depth, width, and activation complexity. Our estimates rely on the combinatorial structure of convex partitions and highlight the distinctive role of depth, which on its own is able to increase the number of regions exponentially. We then introduce a complementary stochastic framework to estimate the average number of linear regions produced by a CPWL network architecture. Under reasonable assumptions, the expected density of linear regions along any one-dimensional path is bounded by the product of depth, width, and a measure of activation complexity (up to a scaling factor). This yields an identical role for the three sources of expressiveness: the exponential growth with depth is no longer observed.
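As a hands-on complement (purely illustrative, not the paper's counting machinery), one can lower-bound the number of linear regions that a small random ReLU network crosses along a one-dimensional path by tracking changes of its activation pattern under dense sampling:

```python
# Illustrative sketch only: empirical count of linear regions along a 1-D path,
# obtained by detecting changes of the ReLU activation pattern.
import numpy as np

rng = np.random.default_rng(0)
widths = [2, 16, 16, 1]                               # input, two hidden layers, output
Ws = [rng.normal(size=(m, n)) for n, m in zip(widths[:-1], widths[1:])]
bs = [rng.normal(size=m) for m in widths[1:]]

def activation_pattern(x):
    """Concatenated on/off pattern of every ReLU unit at input x."""
    pattern, h = [], x
    for W, b in zip(Ws[:-1], bs[:-1]):                # hidden layers only
        h = W @ h + b
        pattern.append(h > 0)
        h = np.maximum(h, 0)
    return tuple(np.concatenate(pattern))

# Walk along the segment from p to q and count pattern switches.
p, q = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
ts = np.linspace(0.0, 1.0, 20_000)
patterns = [activation_pattern((1 - t) * p + t * q) for t in ts]
regions = 1 + sum(a != b for a, b in zip(patterns[:-1], patterns[1:]))
print("linear regions crossed along the path (empirical lower bound):", regions)
```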
Machine learning model development and optimisation can be a cumbersome and resource-intensive process. Custom models are often more difficult to build and deploy, and they require infrastructure and expertise that are costly to acquire and maintain. The machine learning product development lifecycle must therefore account for the difficulties of developing and deploying machine learning models. evoML is an AI-powered tool that provides automated functionalities for machine learning model development, optimisation, and model code optimisation. Core functionalities of evoML include data cleaning, exploratory analysis, feature analysis and generation, model optimisation, model evaluation, model code optimisation, and model deployment. Additionally, a key feature of evoML is that it embeds code and model optimisation into the model development process and includes multi-objective optimisation capabilities.
The Graph Protocol indexes historical blockchain transaction data and makes it available for querying. As the protocol is decentralized, many independent Indexers index the data and compete with each other to serve queries to the Consumers. One dimension along which Indexers compete is pricing. In this paper, we propose a bandit-based algorithm for maximizing Indexers' revenue via discovery of Consumers' budgets. We present the design of, and the considerations behind, a dynamic pricing algorithm used by multiple agents simultaneously. We discuss the results achieved by our dynamic pricing bandits both in simulation and when deployed in production on one of the Indexers operating on Ethereum. We have open-sourced both the simulation framework and the tools we created, which other Indexers have since started to adapt to their own workflows.
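A toy sketch of the budget-discovery idea (not the open-sourced Indexer tooling; the price grid, budget model, and epsilon-greedy rule are all illustrative assumptions): the agent only earns revenue when the posted price falls within the Consumer's hidden budget, and it learns which price to post from that signal alone.

```python
# Toy sketch, not the production agent: epsilon-greedy bandit over a price grid.
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(0.1, 2.0, 20)     # candidate prices (hypothetical units)
counts = np.zeros_like(prices)
value = np.zeros_like(prices)          # running mean revenue per arm
true_budget = 1.3                      # Consumer budget, unknown to the agent

for step in range(5_000):
    if rng.random() < 0.1:                               # explore
        arm = int(rng.integers(len(prices)))
    else:                                                # exploit
        arm = int(np.argmax(value))
    accepted = prices[arm] <= true_budget + rng.normal(scale=0.05)
    reward = prices[arm] * accepted                      # revenue = price if served
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]    # incremental mean update

print("price the bandit converged to:", prices[int(np.argmax(value))])
```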
This thesis introduces quantum natural language processing (QNLP) models based on a simple yet powerful analogy between computational linguistics and quantum mechanics: grammar as entanglement. The grammatical structure of text and sentences connects the meanings of words in the same way that entanglement structure connects the states of quantum systems. Category theory allows us to make this language-to-qubit analogy formal: it is a monoidal functor from grammar to vector spaces. We turn this abstract analogy into a concrete algorithm that translates grammatical structure onto the architecture of parameterised quantum circuits. We then use a hybrid classical-quantum algorithm to train the model so that evaluating the circuits computes the meaning of sentences in data-driven tasks. The implementation of QNLP models motivated the development of DisCoPy (Distributional Compositional Python), the toolkit for applied category theory of which the first chapter gives a comprehensive overview. String diagrams are the core data structure of DisCoPy; they allow us to reason about computation at a high level of abstraction. We show how they can encode not only grammatical structures and quantum circuits, but also logical formulae, neural networks, and arbitrary Python code. Monoidal functors allow us to translate these abstract diagrams into concrete computation, interfacing with optimised task-specific libraries. The second chapter uses DisCoPy to implement QNLP models as parameterised functors from grammar to quantum circuits. It gives a first proof of concept for the more general idea of functorial learning: generalising machine learning from functions to functors by learning from diagram-like data. In order to learn optimal functor parameters via gradient descent, we introduce the notion of diagrammatic differentiation: a graphical calculus for computing the gradients of parameterised diagrams.
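To make the grammar-as-entanglement analogy concrete without relying on DisCoPy's actual API, the plain-numpy sketch below contracts word tensors according to a pregroup parse of "Alice loves Bob": the cups of the grammar become index contractions, and the result lives in an (arbitrarily chosen, 2-dimensional) sentence space.

```python
# Illustration only (plain numpy, not DisCoPy): sentence meaning as the tensor
# contraction dictated by the pregroup grammar.
import numpy as np

d = 4                                      # noun-space dimension (arbitrary)
rng = np.random.default_rng(0)
alice = rng.normal(size=d)                 # state of type n
bob = rng.normal(size=d)                   # state of type n
loves = rng.normal(size=(d, 2, d))         # transitive verb: n.r @ s @ n.l

# Cups of the pregroup grammar become index contractions (einsum):
sentence = np.einsum("i,isj,j->s", alice, loves, bob)
print("meaning of the sentence (vector in the sentence space):", sentence)
```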
As aerial robots are tasked to navigate environments of increased complexity, embedding collision tolerance in their design becomes important. In this survey we review the current state-of-the-art within the niche field of collision-tolerant micro aerial vehicles and present different design approaches identified in the literature, as well as methods that have focused on autonomy functionalities that exploit collision resilience. Subsequently, we discuss the relevance to biological systems and provide our view on key directions of future fruitful research.
Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to reach any arbitrary position and pose. Such a policy would allow for easier control and would be reusable as a key building block for downstream tasks. The method is twofold: first, we introduce a novel exploration algorithm that optimizes for uniform coverage and discovers a set of achievable states, and we investigate its ability to attain both high coverage and hard-to-discover states; second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy's performance in achieving a large number of novel states. Finally, we showcase the effect of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, Halfcheetah, Humanoid and Ant embodiments.
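The following sketch is not the C3PO exploration algorithm itself; it only illustrates the uniform-coverage objective with a greedy farthest-point heuristic, using a stand-in `reach` function in place of a real goal-conditioned rollout.

```python
# Schematic sketch (not C3PO): greedily propose goals far from everything
# reached so far, so the set of achieved states spreads out uniformly.
import numpy as np

rng = np.random.default_rng(0)

def reach(goal):
    """Stand-in for rolling out a goal-conditioned policy; here we just land
    somewhere near the requested goal with noise."""
    return goal + rng.normal(scale=0.05, size=goal.shape)

achieved = [np.zeros(2)]                               # states reached so far
for _ in range(200):
    candidates = rng.uniform(-1, 1, size=(256, 2))     # candidate goals
    dists = np.linalg.norm(
        candidates[:, None, :] - np.asarray(achieved)[None, :, :], axis=-1
    ).min(axis=1)
    goal = candidates[int(np.argmax(dists))]           # least-covered candidate
    achieved.append(reach(goal))

# The achieved set can then serve as training goals for a goal-conditioned
# policy such as a goal-based SAC variant.
print("collected", len(achieved), "achievable states")
```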
Compared with model-based control and optimization methods, reinforcement learning (RL) provides a data-driven, learning-based framework for formulating and solving sequential decision-making problems. The RL framework has become promising thanks to the greatly improved data availability and computing power in the aviation industry. Many aviation-based applications can be formulated or treated as sequential decision-making problems. Some of them are offline planning problems, while others need to be solved online and are safety-critical. In this survey paper, we first describe standard RL formulations and solutions. Then we survey the landscape of existing RL-based applications in aviation. Finally, we summarize the paper, identify the technical gaps, and suggest future directions for RL research in aviation.
Recent mean field interpretations of learning dynamics in over-parameterized neural networks offer theoretical insights on the empirical success of first order optimization algorithms in finding global minima of the nonconvex risk landscape. In this paper, we explore applying mean field learning dynamics as a computational algorithm, rather than as an analytical tool. Specifically, we design a Sinkhorn regularized proximal algorithm to approximate the distributional flow from the learning dynamics in the mean field regime over weighted point clouds. In this setting, a contractive fixed point recursion computes the time-varying weights, numerically realizing the interacting Wasserstein gradient flow of the parameter distribution supported over the neuronal ensemble. An appealing aspect of the proposed algorithm is that the measure-valued recursions allow meshless computation. We demonstrate the proposed computational framework of interacting weighted particle evolution on binary and multi-class classification. Our algorithm performs gradient descent of the free energy associated with the risk functional.
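Schematically, and in assumed standard notation (the paper's exact formulation may differ), the distributional flow is approximated by a Sinkhorn-regularized Wasserstein proximal recursion over the parameter distribution:

\[
\mu_{k+1} \;=\; \underset{\mu}{\arg\min}\;\; \tfrac{1}{2}\, W_{\varepsilon}^{2}(\mu, \mu_{k}) \;+\; h\, F(\mu), \qquad k = 0, 1, 2, \dots
\]

where \(W_{\varepsilon}\) denotes the entropy-regularized (Sinkhorn) Wasserstein distance, \(h\) the step size, and \(F\) the free energy associated with the risk functional; over a weighted point cloud, each proximal step reduces to a contractive fixed-point recursion for the particle weights, which is what makes the computation meshless.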
Semantic search is an important task whose goal is to find, for a given query, the relevant index entries in a database. It requires a retrieval model that properly learns the semantics of sentences. Transformer-based models are widely used as retrieval models owing to their excellent ability to learn semantic representations, and many regularization methods tailored to them have also been proposed. In this paper, we propose a new regularization method, Regularized Contrastive Learning, which helps transformer-based models learn better sentence representations. It first augments several different semantic representations for every sentence and then takes them into the contrastive objective as regulators. These contrastive regulators can overcome the overfitting problem and alleviate the anisotropy problem. We first evaluate our approach on 7 semantic-search benchmarks with the pre-trained model SRoBERTa; the results show that our method learns superior sentence representations more effectively. We then evaluate it on 2 challenging FAQ datasets with long queries and indexes, COUGH and FAQIR; our experimental results demonstrate that our method outperforms the baseline methods.
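A minimal PyTorch sketch of the idea (not the paper's implementation; the dropout-based augmentation, temperature, and MLP encoder are illustrative assumptions): several stochastic "views" of each sentence are encoded, and an InfoNCE-style term over those views acts as the contrastive regularizer added to the main retrieval loss.

```python
# Minimal sketch, not the paper's code: a contrastive regularizer over two
# dropout-induced views of the same batch of sentences.
import torch
import torch.nn.functional as F

def contrastive_regularizer(encoder, features, tau=0.05):
    """Encode the same batch twice; dropout inside the encoder makes the two
    passes differ, giving two augmented views of each sentence."""
    view_a = F.normalize(encoder(features), dim=-1)
    view_b = F.normalize(encoder(features), dim=-1)
    logits = view_a @ view_b.T / tau               # (B, B) cosine similarities
    targets = torch.arange(features.size(0))       # the matching view is the positive
    return F.cross_entropy(logits, targets)

# Toy usage with a dropout MLP standing in for a transformer encoder.
encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Dropout(0.1),
    torch.nn.Linear(64, 16),
)
sentence_features = torch.randn(8, 32)             # placeholder sentence features
loss = contrastive_regularizer(encoder, sentence_features)
print(float(loss))                                  # add this term to the retrieval loss
```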